
Conversation

@njzjz (Member) commented Jan 10, 2026

Summary by CodeRabbit

  • Refactor
    • Updated learning rate scheduling implementation with enhanced type annotations and more flexible parameter handling for improved code clarity.


Copilot AI review requested due to automatic review settings January 10, 2026 16:54
coderabbitai bot (Contributor) commented Jan 10, 2026

📝 Walkthrough

The changes refactor the learning rate module in the training pipeline by replacing an import from deepmd.pd.utils.learning_rate (LearningRateExp) with an import from deepmd.dpmodel.utils.learning_rate (BaseLR). The inner get_lr function signature is updated with explicit type hints for parameters and return types to reflect the new learning rate base class.
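For orientation, the import swap described above looks roughly like this (module paths taken from the walkthrough; the commented-out line is the assumed prior state):

```python
# Before: the pd backend imported its own exponential-only scheduler.
# from deepmd.pd.utils.learning_rate import LearningRateExp

# After: the shared dpmodel base class selects the concrete schedule.
from deepmd.dpmodel.utils.learning_rate import BaseLR
```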

Changes

Cohort / File(s): Learning Rate Module Refactoring — deepmd/pd/train/training.py
Summary: Replaced the LearningRateExp import with a BaseLR import from deepmd.dpmodel.utils.learning_rate. Updated the get_lr function signature with explicit type hints (lr_params: dict[str, Any] → BaseLR). Modified the internal construction to use BaseLR(**lr_params) instead of LearningRateExp(**lr_params).

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 33.33%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them (a sketch for get_lr follows these checks).
✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title 'chore(pd): sync get_lr from pt to pd' accurately describes the main change: synchronizing the get_lr implementation from the PyTorch (pt) backend to the Paddle (pd) backend by replacing LearningRateExp with BaseLR.
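As a starting point for the failed check, a minimal sketch of a docstring for the inner get_lr (numpydoc style, as used elsewhere in deepmd-kit; the exact wording is an assumption):

```python
from typing import Any

def get_lr(lr_params: dict[str, Any]) -> "BaseLR":
    """Build a learning-rate schedule from the config section.

    Parameters
    ----------
    lr_params : dict[str, Any]
        The "learning_rate" section of the training configuration;
        a "stop_steps" entry is injected before construction.

    Returns
    -------
    BaseLR
        The constructed learning-rate schedule.
    """
    ...
```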



coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
deepmd/pd/train/training.py (1)

241-245: Avoid mutating config via in-place updates to lr_params.
Make a shallow copy before injecting stop_steps to prevent side effects if the dict is reused/logged elsewhere.

Proposed diff:

```diff
 def get_lr(lr_params: dict[str, Any]) -> BaseLR:
-    lr_params["stop_steps"] = self.num_steps - self.warmup_steps
-    lr_schedule = BaseLR(**lr_params)
-    return lr_schedule
+    lr_params = dict(lr_params)  # avoid mutating config in-place
+    lr_params["stop_steps"] = self.num_steps - self.warmup_steps
+    return BaseLR(**lr_params)
```
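To make the side effect concrete, a standalone illustration (not code from the PR) of why the shallow copy matters:

```python
config = {"learning_rate": {"type": "exp", "start_lr": 1.0e-3}}

def get_lr_mutating(lr_params: dict) -> dict:
    # Writes through to the caller's dict: the config now carries
    # an injected key it never declared.
    lr_params["stop_steps"] = 90000
    return lr_params

get_lr_mutating(config["learning_rate"])
print(config["learning_rate"])
# {'type': 'exp', 'start_lr': 0.001, 'stop_steps': 90000}

# With lr_params = dict(lr_params) inside the function, the caller's
# config would remain exactly as it was written.
```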
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files changed between 9b1df92 (base of the PR) and bf21f31.

📒 Files selected for processing (1)
  • deepmd/pd/train/training.py
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: njzjz
Repo: deepmodeling/deepmd-kit PR: 4219
File: deepmd/utils/learning_rate.py:48-53
Timestamp: 2024-10-15T22:22:24.889Z
Learning: Methods in `deepmd/utils/learning_rate.py` that return NumPy scalar types should have return type annotations using the corresponding NumPy types, such as `np.float64`.
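For illustration, the pattern that learning describes, applied to a hypothetical scheduler class (not the actual deepmd code):

```python
import numpy as np

class ExpSchedule:  # hypothetical, for illustration only
    def __init__(self, start_lr: float, decay_rate: float) -> None:
        self.start_lr = start_lr
        self.decay_rate = decay_rate

    def value(self, step: int) -> np.float64:
        # np.exp on a Python float returns a NumPy scalar, so the
        # return annotation is np.float64 rather than the builtin float.
        return np.float64(self.start_lr) * np.exp(-self.decay_rate * step)
```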
🧬 Code graph analysis (1)
deepmd/pd/train/training.py (2)
deepmd/dpmodel/utils/learning_rate.py (1)
  • BaseLR (21-49)
deepmd/pt/train/training.py (1)
  • get_lr (269-272)
🔇 Additional comments (1)
deepmd/pd/train/training.py (1)

33-35: Compatibility check: BaseLR(**lr_params) may require a type/variant key that older PD configs didn’t provide.
If PD configs previously relied on an implicit LearningRateExp, please confirm the PD config schema/examples always include the discriminator BaseLR.__new__ uses to pick a concrete schedule (otherwise this becomes a runtime failure).
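For readers unfamiliar with the pattern, one common shape for such __new__-based dispatch is sketched below. This is a generic illustration with an assumed registry and "type" key; the actual discriminator and defaults live in deepmd/dpmodel/utils/learning_rate.py and may differ:

```python
class BaseLR:
    _registry: dict[str, type] = {}  # discriminator value -> concrete class

    @classmethod
    def register(cls, key: str):
        # Decorator used by concrete schedules to register themselves.
        def wrap(subclass: type) -> type:
            cls._registry[key] = subclass
            return subclass
        return wrap

    def __new__(cls, **lr_params):
        if cls is BaseLR:
            # If the discriminator is missing and no default exists,
            # the KeyError here is the runtime failure flagged above
            # for older pd configs that implicitly assumed "exp".
            cls = BaseLR._registry[lr_params.get("type", "exp")]
        return super().__new__(cls)
```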

Copilot AI (Contributor) left a comment

Pull request overview

This PR synchronizes the get_lr function implementation in the PaddlePaddle (pd) backend with the PyTorch (pt) implementation, enabling support for multiple learning rate schedulers beyond just the exponential decay type.

Changes:

  • Updated get_lr function to use the plugin-based BaseLR class instead of directly instantiating LearningRateExp
  • Added type hints to the function signature
  • Removed the assertion limiting learning rate types to only "exp" — see the config sketch below
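With the assertion gone, the schedule is chosen by the config itself. A hedged sketch of the "exp" case, whose key names follow deepmd-kit's documented learning_rate schema (values are illustrative):

```python
# The "type" key is the discriminator BaseLR dispatches on; other
# registered schedule types would substitute their own keys here.
lr_params = {
    "type": "exp",
    "start_lr": 1.0e-3,
    "stop_lr": 3.5e-8,
    "decay_steps": 5000,
}
```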


codecov bot commented Jan 10, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 81.93%. Comparing base (9b1df92) to head (bf21f31).
⚠️ Report is 3 commits behind head on master.

Additional details and impacted files
```diff
@@            Coverage Diff             @@
##           master    #5144      +/-   ##
==========================================
- Coverage   81.93%   81.93%   -0.01%
==========================================
  Files         712      712
  Lines       72895    72894       -1
  Branches     3616     3616
==========================================
- Hits        59729    59727       -2
- Misses      12001    12004       +3
+ Partials     1165     1163       -2
```

☔ View full report in Codecov by Sentry.

@njzjz njzjz requested a review from iProzd January 10, 2026 18:19
@iProzd iProzd enabled auto-merge January 12, 2026 03:26
@iProzd iProzd added this pull request to the merge queue Jan 12, 2026
Merged via the queue into deepmodeling:master with commit 0a807cf Jan 12, 2026
70 checks passed